Is a naturalistic account of reason compatible with its objectivity?

How can rational objectivism be reconciled with my principles of naturalism?

Greg Detre

Monday, 5th November, 2001

Dr Tasioulas

 

 

Perhaps part of the problem with current conceptions of rationality is that they conflate a number of components, which together give rise to what we consider to be rationality. This is a specific application of a broad idea currently popular in cognitive science and philosophy of mind, which goes under many names, including modularity, the multiple drafts hypothesis (Dennett) and the society of mind (Minsky). I will adopt Dennett's terminology of 'multiple drafts', used to refer to concurrent, restricted processes or modules, which interact, influence each other and compete for control of the system. At different times, different modules will dominate, allowing us to react flexibly in a variety of situations, and to literally 'contain multitudes'[1].

Taxonomies of rationality

There are an enormous number of ways in which we could divide up rationality, some of which result in similar taxonomies.

Evolutionary age

The most evolutionarily ancient part of the brain is the hindbrain. Its broad design can be traced back to our reptilian past. It links directly to the spinal cord, controlling basic functions such as respiration and heartbeat. On top of this rests the 'mammalian' midbrain, at a slightly higher level. The highest level is the neocortex, a sheet only a few millimetres thick (in humans) that is most highly developed in intelligent species like primates and dolphins. In conjunction with various sub-cortical areas, the cortex is responsible for all higher-level processing. It might be possible, in the long term, to separate out the various roles played in reasoning by the different areas, though it seems likely that almost all of what we would really term rationality occurs in various parts of the cortex.

Forward/backward

There is a definite attraction to dividing our reasoning processes into forward and backward reasoning.

By backward reasoning, I mean the process by which we assess the validity of arguments already before us, or consider a conclusion retrospectively.

In contrast, forward reasoning concerns inference from premises, and any thinking where we are considering a potentially infinite number of unknown conclusions, trying to find the 'right' one.

To illustrate the difference, consider the game of chess. When faced with a board position, whether an opening gambit or an endgame, we can safely assume that neither a human nor any (currently existing) computer player is able to consider every available continuation. However, with varying degrees of success (and subject to practice and understanding of the rules), a human can look at a chess board and choose a 'best' move. This is forward reasoning. It is directed towards an ultimate goal, that of winning the game.

However, when we watch a chess game or consider the move that our opponent has made, we might try to decide why a given move was made. Given a fixed starting and finishing point, we try to establish why the move is good. We are evaluating a static situation. Indeed, whenever we consciously consider a move that has welled up out of our subconscious, we are using backward reasoning.

When we talk of the number of moves that a chess-player actively considers as being a handful, or perhaps a few tens, we are talking of the number that our faculty of backward reasoning can consider. However, in order to narrow the enormous space of possible moves down to these few most appropriate ones in the first place, we use forward reasoning. Our brain explores an enormous possibility space unconsciously, presenting only the best few for careful consideration.
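The division can be made concrete with a toy sketch. The code below is purely illustrative and every helper in it (legal_moves, heuristic_score, evaluate) is a hypothetical stand-in rather than any real chess engine's API: forward reasoning generates and prunes a huge candidate space, while backward reasoning evaluates a single move already on the table.

```python
import random

# A toy sketch, not a real chess engine: positions are opaque strings and
# every helper below is a hypothetical stand-in for genuine chess logic.

def legal_moves(position):
    return [f"move-{i}" for i in range(30)]    # ~30 legal moves is typical

def heuristic_score(position, move):
    return random.random()                     # stand-in for trained judgement

def evaluate(position, move):
    return heuristic_score(position, move)     # stand-in static evaluation

def forward_reasoning(position, breadth=5):
    """Forward reasoning: explore the space of continuations unconsciously
    and surface only the best few candidates for conscious scrutiny."""
    moves = sorted(legal_moves(position),
                   key=lambda m: heuristic_score(position, m), reverse=True)
    return moves[:breadth]

def backward_reasoning(position, move):
    """Backward reasoning: assess a single move already on the table."""
    return evaluate(position, move)

for move in forward_reasoning("some opening position"):
    print(move, round(backward_reasoning("some opening position", move), 2))
```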

Descartes' method of doubt provides the supreme example of forward reasoning - having elected to suspend all belief, he has to reason forwards from nowhere. He argues that the Cogito is not a syllogism, but rather serves as a premise. In a way, he knows that he is trying to justify his beliefs in the world as it manifestly appears to us, but the routes available by which we might attempt this are genuinely numberless and cannot be wholly captured by any schema. This is partly an aspect of language - as long as our arguments are based in language, and we have no real alternative, the combinatorial nature of syntax (which is what gives it its huge expressive power) is such that we can construct a genuinely infinite number of sentence-propositions, although of course most of these would be wholly nonsensical.

The situation in philosophy, then, is very different from that of a chess game, where the available/allowed moves are restricted by definitely describable and easily applied rules. In philosophy the only restrictions are those of grammar (propositions must be expressible as sentences) and of plausibility (the more believable, i.e. justified, a proposition seems to us, the better). This may present different challenges for our reasoning systems, and it may be that we are better at either forward or backward reasoning in this situation.

Linguistic/non-linguistic

There is a definite appeal to the idea that some of our reasoning is non-linguistic in some fashion. This can be taken in several ways:

1.      In opposition to the Language of Thought Hypothesis, the processing of the brain cannot be described as language-like syntactic manipulation of symbols - the results of such processing can usually (but perhaps not always) be expressed propositionally

2.     

Certainly, we can think of many situations in which we don't appear to be reasoning linguistically. For example, when faced with two similar but not identical pictures, we are usually able to point out the difference between them. However, it is not at all obvious that this is a linguistic act or reasoning procedure. According to Nagel's definition of reason as 'accessing objectively valid truths', it qualifies as reasoning, but it seems implausible that our performance of the task is a linguistic operation. Rather, we seem to be performing the task, and only afterwards expressing the result propositionally.

Domains of rationality

Rationality might also be divided by the domain in which it is used: ecological (e.g. theory of mind), mathematical, inferential, and so on.

Is it possible that the scientific and the logical domains of reasoning actually reflect fundamentally different rational processes?

Levels of rationality

If we could show that the demands placed on our reasoning are somehow less for naturalistic reasoning than for a priori or philosophical reasoning, then we could accept a naturalistic account of the limitations of reasoning, while evading any attempt by philosophers to show that such an account is self-refuting.

How can we understand this idea of levels of rationality? Is it simply that different domains of thought and discussion place greater or lesser demands on our intellect? That our brains just get them right more often? Or that they are somehow fundamentally, of their nature, easier to grasp, and their conclusions easier to draw? Or, even less satisfyingly, simply that we have no choice but to take some things as given, since otherwise our mental scaffold can never get off the (wholly sceptical) ground?

 

Cherniak - minimal rationality

Cherniak provides one approach to considering degrees of rationality. In particular, he attacks the idealised, more-or-less all-or-nothing conception of rationality that authors like Dennett, Davidson, Quine and Cohen support.

He accuses Davidson, in 'Psychology as philosophy', of saying that we need a 'large degree of consistency' while actually arguing for ideal consistency, as does Quine's translation policy. Inconsistency can be very difficult to unmask if the logical relations are convoluted and the inconsistency implicit - also, we tend to compartmentalise our beliefs, only comparing beliefs within a subset.

He attacks Dennett's claim that 'as we uncover apparent irrationality under an Intentional interpretation of an entity, our grounds for ascribing any beliefs at all wanes' - Cherniak argues that this is not the case for above-minimally rational creatures. I'm not so sure - a just-above-minimally rational creature might seem to hold only a skeletal set of beliefs. It's a matter of degree, really.

Cherniak is looking to explain why intentional explanations are so successful as a means of predicting and understanding others' behaviour. By intentional explanations, he refers to the attribution of a cognitive system of beliefs, desires, perceptions, etc. He wants to show that too weak a conception of rationality is insufficient to explain the success of these intentional explanations, while too strong a conception also fails, for different reasons, as well as being wholly inapplicable to human beings in the real world.

His 'minimal general conditions for rationality' have to lie between what he characterises as the 'assent theory of belief' and the 'ideal conditions of rationality'. The assent theory of belief holds that:

An agent believes 'all and only those statements which he would affirm', i.e. believing a proposition consists simply in having an accompanying 'feeling of assent'.

Almost anything goes in such a caricatured theory, since it places no inherent consistency constraints on beliefs, and provides no system by which inferences can be drawn from a given set of beliefs. As a result, it is quite unable to explain the predictive success of assuming intentionality in other people, since an agent is free to hold any beliefs he chooses - or at least, there is no systematic way of predicting, deducing or explaining which beliefs such an agent would have.

At the opposite end of the spectrum, Cherniak characterises the ideal general rationality criterion as:

An ideally-rational agent with a particular belief-desire set would:

make all of the sound inferences from his belief set

undertake all actions which would, according to his beliefs, tend to satisfy his desires

eliminate all inconsistencies that arise in his belief-set

This can be weakened slightly by modifying 'all actions' to 'most actions', or perhaps just 'some non-empty set of actions'. However, it leaves no room for 'sloppiness'. Sloppiness in Cherniak's sense is almost a technical term, encompassing all of the factors which undermine our deductive ability. These include: laziness or carelessness; the difficulty of the deduction to be made (i.e. whether it is convoluted, indirect, or requires numerous unrelated-seeming premises); cognitive limitations (e.g. short-term memory); time constraints; and, most fundamentally, the 'finitary predicament'. We have finite-sized brains and a finite time available to us, and so we are restricted in the number and range of inferences we can consider, let alone draw.

These idealisations are made because they simplify human behaviour to a level manageable enough to formalise in disciplines which deal with an enormous mass of human interactions, like economics.

The minimal rationality conditions he sets out are:

A minimally-rational agent with a particular belief-desire set would:

make some, but not necessarily all of the sound inferences from his belief set

attempt some, but not necessarily all, of those actions which would, according to his beliefs, tend to satisfy his desires (termed 'apparently appropriate actions')

not attempt most (but not necessarily all) of the actions which are inappropriate given that belief-desire set (the corresponding �negative rationality� requirement)

eliminate some (but not necessarily all) inconsistencies that arise in his belief-set
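To make the contrast between the two criteria explicit, they can be stated schematically. The following is a minimal sketch of my own, reducing beliefs, inferences and actions to toy sets; it is not Cherniak's formalism. I read 'not attempt most inappropriate actions' as avoiding more than half of them, and the inconsistency-elimination clause is simplified to a snapshot check.

```python
# A schematic sketch of Cherniak's contrast, not his formalism: an agent
# is a dictionary of toy sets of inferences, actions and inconsistencies.

def is_ideally_rational(agent):
    """Ideal criterion: all sound inferences drawn, all apparently
    appropriate actions attempted, no inconsistencies left standing."""
    return (agent["inferences_drawn"] == agent["sound_inferences"]
            and agent["actions_attempted"] == agent["appropriate_actions"]
            and not agent["inconsistencies"])

def is_minimally_rational(agent):
    """Minimal criterion: some sound inferences, some appropriate
    actions, and avoidance of most inappropriate actions."""
    attempted = agent["actions_attempted"]
    bad = agent["inappropriate_actions"]
    return (agent["inferences_drawn"] & agent["sound_inferences"] != set()
            and attempted & agent["appropriate_actions"] != set()
            and len(attempted & bad) < len(bad) / 2)

agent = {
    "sound_inferences": {"modus ponens on B1", "chain B2, B3 to C"},
    "inferences_drawn": {"modus ponens on B1"},       # some, not all
    "appropriate_actions": {"eat", "sleep"},
    "actions_attempted": {"eat"},                     # some, not all
    "inappropriate_actions": {"step off cliff", "drink bleach"},
    "inconsistencies": set(),
}
print(is_minimally_rational(agent))   # True
print(is_ideally_rational(agent))     # False: not every inference drawn
```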

He is particularly keen to attack the idea that an agent actually believes (or infers, or can infer) all consequences of his beliefs.

I don't think that any of the philosophers whom Cherniak accuses of idealising rationality would explicitly accept the premise couched in those terms. It is obvious that it would require infinite resources, since it would probably require analysing some belief-sentences that could not be stated, let alone understood, within the agent's lifetime.

For instance, take the Goldbach conjecture. We have a set of axioms and a conjectured inference, and yet we are unable to tell whether the inference follows deductively. Appeals to more prosaic cognitive limitations like short-term memory, carelessness, or simply failing by accident to take relevant premises into account cannot explain our failure. In one sense, the problem is simply that the space of possible mathematical proofs is far, far too big for us to be able to search through it. This feels like a simple concession to Cherniak's statement of our finitary predicament. But I think it concedes far more than that - because the space in which we operate on a daily basis when acting rationally is almost always far larger than we can possibly search. And if this is the case, then we simply are not rational in the way that Nagel requires.
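The Goldbach case makes the asymmetry vivid: checking individual instances of the conjecture is a trivial, mechanical search, while finding a proof means searching a space we cannot even enumerate. A minimal sketch, where the naive primality test and the bound of 40 are my own illustrative choices:

```python
# Checking instances of the Goldbach conjecture is mechanical; proving it
# is not. Naive primality test and the bound 40 are illustrative choices.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_witness(even_n):
    """Return a pair of primes summing to even_n, if one can be found."""
    for p in range(2, even_n // 2 + 1):
        if is_prime(p) and is_prime(even_n - p):
            return p, even_n - p
    return None    # no witness: a counterexample to the conjecture

# Verifying instances is trivial forward computation...
for n in range(4, 40, 2):
    print(n, "=", goldbach_witness(n))
# ...but no amount of instance-checking is a proof: the space of possible
# proofs is not one we can enumerate and search exhaustively.
```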

The burden is on Nagel to explain why we can't do the things that a perfectly rational being can do - this requires a naturalistic approach. In order to make his position intelligible, he has to explain why maths isn't trivial, rather than trying to appeal to our intuitions about our ability to appreciate a mathematical or logical truth from the content of the proposition alone.

If rationality amounts to searching a space, then we aren't rational - unless, perhaps, we employ special heuristics in some way???

We also know which reasoning tasks are more difficult for humans than others, i.e. we have a weighting of deductive tasks with respect to their feasibility for the reasoner, so that we can guess which inferences are easier and more likely to be drawn - this is the theory of feasible inferences. He leaves it as an open question whether the most 'obvious' inferences (like modus ponens) could be performed by any creature that qualifies as having beliefs.

A theory of human memory structure helps predict which beliefs will be recalled when, e.g. whether the premises and rules are active at the time of considering a belief or conclusion. Thus, the activated belief subset is subject to a more stringent inference condition than the inactive belief set. Of course, I think it would be an even more powerful theory if it were couched in connectionist terms of association, rather than discrete subsets. The sketch below illustrates the discrete version.
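A minimal sketch of the activated-subset idea, with my own toy representation of beliefs as strings and negation as a 'not ' prefix: contradictions in the long-term store stay latent until both members are recalled into working memory together.

```python
# Toy model: only beliefs recalled into working memory face the
# stringent consistency condition; the inactive store goes unchecked.

class BelieverSketch:
    def __init__(self, beliefs):
        self.beliefs = set(beliefs)   # long-term, compartmentalised store
        self.active = set()           # working memory

    def recall(self, *props):
        """Activate whichever of the given propositions are believed."""
        self.active = set(props) & self.beliefs

    def active_consistent(self):
        """No belief in working memory may co-occur with its negation."""
        return not any(("not " + p) in self.active for p in self.active)

b = BelieverSketch({"it is raining", "not it is raining", "grass is green"})
b.recall("it is raining", "grass is green")
print(b.active_consistent())   # True: the contradiction stays latent
b.recall("it is raining", "not it is raining")
print(b.active_consistent())   # False: activated together, it is exposed
```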

 

In determining whether a person ought to make a given inference in order to be pragmatically rational, you need to take into account: the soundness of the inference; its feasibility; and its apparent usefulness according to the person's beliefs and desires.
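As a sketch, the three factors might be combined into a single score. The [0, 1] scales, weights and threshold below are my own illustrative assumptions, not anything Cherniak proposes:

```python
def ought_to_infer(soundness, feasibility, usefulness,
                   weights=(0.5, 0.25, 0.25), threshold=0.6):
    """Toy model: each factor is scored in [0, 1]; an inference is
    pragmatically rational to draw when the weighted sum clears a
    threshold. Weights and threshold are illustrative, not Cherniak's."""
    factors = (soundness, feasibility, usefulness)
    return sum(w * f for w, f in zip(weights, factors)) >= threshold

# A sound, feasible, useful inference ought to be drawn...
print(ought_to_infer(1.0, 0.9, 0.8))    # True  (score 0.925)
# ...while a sound but infeasible one need not be, on pain of idealisation.
print(ought_to_infer(1.0, 0.05, 0.3))   # False (score 0.5875)
```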

Cherniak skips over the difference between conscious and unconscious inferences, and explicitly makes the assumption that our entire belief-desire system can be expressed as a finite set of (logically interpretable) sentences.

Others have taken a similar approach, notably Simon and Goldman.

 

Other taxonomies

adaptive value

amount of time we spend on it

context dependence

practical/theoretical - is this the same as instrumental vs what(???)???

choosing premises vs arguing from those premises

physical and abstract

 

Conclusion

My stated intention at the start of this paper was to investigate how easily a naturalistic framework and rational objectivism can accommodate each other. I was hoping and expecting to find that they were incompatible in certain fundamental, ineradicable ways. Given that I feel that we are far better scientists than philosophers, this would have further persuaded me that the reason we disagree on almost all non-empirical issues is that we are not sufficiently rational or powerful thinkers to make real headway in such areas. This would not necessarily be to dismiss out of hand the entire philosophical enterprise, but it would undermine it in those areas where there is no support from other disciplines to provide arbitration in disputes.

As it has turned out, I have found even a restrictive, contemporary naturalistic account to be surprisingly pliable with respect to our rational capacities.

 

Questions

Nietzsche quote in Kinds of Minds

what does Descartes mean when he says that the Cogito is not a syllogism???

what is the difference between evolutionary discussions and connectionist discussions???

evolution is about why humans as a species might/might not be rational, and connectionism is about whether what we know about ourselves tells us about the extent of our rational capacities

to say that something can be expressed propositionally is to say that it can be expressed as a true-or-false sentence, i.e. linguistically, right???

to what extent are we searching a space - connectionist question

is it true to say that long-term disagreements arise more or less only in non-empirical discussions???

intuitive thinking??? association etc. in my 'do machines think?' essay???

better vs faster intelligence??? see 'Coding a transhuman AI'

do you frame or draw an inference???

Simon's 'bounded rationality'???

 



[1] Walt Whitman, 'Song of Myself':

Do I contradict myself?

Very well then I contradict myself,

(I am large, I contain multitudes.)